-
→ everyone wants their SR algos to predict retention. I want one that makes me learn the fastest.
- such a metric should include learning rate and ⌣ cost per trial
- → ◊ Spaced Repetition
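The metric hinted at above could be something like "cards newly learned per minute of review time". A minimal sketch of that idea, assuming a hypothetical review-log shape — the `Review` record and its fields are illustrative, not any real SR library's API:

```python
from dataclasses import dataclass

# Hypothetical review record; field names are assumptions for illustration.
@dataclass
class Review:
    card_id: int
    seconds: float   # cost per trial: time spent on this review
    learned: bool    # did the card cross some "known" threshold after this trial?

def learning_rate(log: list[Review]) -> float:
    """Cards newly learned per minute of total review time spent."""
    total_minutes = sum(r.seconds for r in log) / 60
    newly_learned = len({r.card_id for r in log if r.learned})
    return newly_learned / total_minutes if total_minutes else 0.0
```

The point of dividing by time rather than by review count is that it folds the ⌣ cost per trial into the metric: an algorithm that learns the same cards with shorter or fewer trials scores higher.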
-
bad incentives:
- imagine you randomly prompt users to make up a mnemonic, and say the result is a coin flip: either it doesn't work, or it gives 100% retention forever. a universal prediction metric would punish you for this, right?
- more specific example: if you have two similar cards in your deck, essentially causing interference/malapropisms, do you trigger a leech prompt that may make things better or worse?
- also, it (kind of) incentivizes you to build NN algos fitted to existing/old data instead of outdoing them
- funnily enough, Anki would probably improve on such a metric if it didn't remove leeches from active learning (because predicting them is easy)
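For the similar-cards case: before any leech prompt even fires, one could flag pairs of cards that are likely to interfere. A minimal sketch using plain string similarity — the function name, the 0.8 cutoff, and the idea of comparing prompt text are all assumptions for illustration, not how any existing SR system does it:

```python
from difflib import SequenceMatcher
from itertools import combinations

def interference_candidates(cards: dict[int, str], threshold: float = 0.8):
    """Flag pairs of card prompts similar enough to risk interference.

    `cards` maps card id -> prompt text; `threshold` is an illustrative cutoff.
    Returns (id_a, id_b, similarity) tuples for pairs above the cutoff.
    """
    pairs = []
    for (a, text_a), (b, text_b) in combinations(cards.items(), 2):
        ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if ratio >= threshold:
            pairs.append((a, b, ratio))
    return pairs
```

Whether surfacing such a pair to the user helps or hurts is exactly the open question above: the intervention changes the learner's outcomes, which a pure retention-prediction metric has no way to credit.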
-
an expression of "I'm just doing the algo, you do the ⌣ user experience"
- exemplified in the ebisu doc/discussion, if you want to go there
-
this whole thing neighbors ~ the standard button labels in spaced repetition are unscientific and likely terrible